66 research outputs found

    Enabling Reliable, Efficient, and Secure Computing for Energy Harvesting Powered IoT Devices

    Get PDF
    Energy harvesting is one of the most promising techniques to power devices for the future generation of IoT. While energy harvesting does not have the longevity, safety, and recharging concerns of traditional batteries, its instability brings a new challenge to embedded systems: the energy harvested from the environment is usually weak and intermittent. With traditional CMOS-based technology, whenever the power is off, the computation has to start from the very beginning. Compared with existing CMOS-based memory devices, emerging non-volatile memory devices such as PCM and STT-RAM have the benefit of retaining data even when there is no power. By checkpointing the processor's volatile state to non-volatile memory, a program can resume its execution immediately after power comes back instead of restarting from the very beginning. However, checkpointing alone is not sufficient for energy harvesting systems. First, the program execution resumed from the last checkpoint might not execute correctly and can cause an inconsistency problem in the system, due to the mismatch between the volatile system state and the non-volatile system state at the time of checkpointing. Second, the process of checkpointing consumes a considerable amount of energy and time due to the slow and energy-consuming write operations of non-volatile memory. Finally, connecting to the internet exposes energy harvesting IoT devices to many security issues. Traditional data encryption methods are both energy- and time-consuming and do not fit resource-constrained IoT devices, so a lightweight encryption method is urgently needed to secure them. Targeting these three challenges, this dissertation proposes three techniques to enable reliable, efficient, and secure computing in energy harvesting IoT devices. First, a consistency-aware checkpointing technique is proposed to avoid inconsistency errors arising from the mismatch between volatile and non-volatile state. Second, a checkpoint-aware hybrid cache architecture is proposed to guarantee reliable checkpointing while keeping the checkpointing overhead of the cache low. Finally, to ensure the security of energy harvesting IoT devices, an energy-efficient in-memory encryption implementation is proposed that quickly encrypts the data in non-volatile memory and protects the embedded system from both physical and online attacks.
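    The abstract does not give implementation details; as a rough illustration of why volatile/non-volatile consistency matters during checkpointing, the following Python sketch shows a double-buffered checkpoint with a small commit record written last, so that a power failure in the middle of a checkpoint never leaves the system resuming from a half-written state. All names (NVM_SLOTS, checkpoint, restore) are hypothetical and not taken from the dissertation.

```python
# Minimal sketch, assuming a double-buffered checkpoint area in non-volatile memory.
import copy

NVM_SLOTS = [None, None]                      # two checkpoint slots in "non-volatile" storage
NVM_COMMITTED = {"slot": None, "version": 0}  # tiny commit record, always written last

def checkpoint(volatile_state, version):
    """Write the volatile state to the inactive slot, then commit it atomically."""
    target = 1 if NVM_COMMITTED["slot"] == 0 else 0
    NVM_SLOTS[target] = copy.deepcopy(volatile_state)  # may be interrupted by a power loss
    # The commit record is updated only after the full state is written, so a power
    # failure mid-checkpoint leaves the previously committed checkpoint intact.
    NVM_COMMITTED["slot"] = target
    NVM_COMMITTED["version"] = version

def restore():
    """Return the last consistently committed state and its version, or None."""
    if NVM_COMMITTED["slot"] is None:
        return None
    return copy.deepcopy(NVM_SLOTS[NVM_COMMITTED["slot"]]), NVM_COMMITTED["version"]

# Example: checkpoint twice, then "lose power" and restore the latest consistent state.
checkpoint({"pc": 0x40, "regs": [1, 2, 3]}, version=1)
checkpoint({"pc": 0x5c, "regs": [4, 5, 6]}, version=2)
state, version = restore()   # -> state from version 2
```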

    Research on Embedded Sensors for Concrete Health Monitoring Based on Ultrasonic Testing

    Get PDF
    In this article, embedded ultrasonic sensors were prepared using a 1–3-type piezoelectric composite and a piezoelectric ceramic as the piezoelectric elements, respectively. The frequency bandwidth of the novel embedded ultrasonic sensors was investigated. To obtain the relationship between the received ultrasonic velocity and compressive strength, as well as their response signals to crack damage, the sensors were fabricated and embedded into cement mortar before testing. The results demonstrated that the piezoelectric composite sensor had a wider frequency bandwidth than the piezoelectric ceramic sensor. The compressive strength and ultrasonic velocity had a positive linear relationship, with a correlation coefficient of 0.9216. The head wave amplitude of the received ultrasonic signal was sensitive to crack damage and gradually decayed with the increasing degree of cement damage. Thus, the novel embedded ultrasonic sensors are suitable for concrete health monitoring via ultrasonic non-destructive testing.
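    As a small worked illustration of the reported positive linear relationship (not the authors' data), the following Python snippet fits a least-squares line between ultrasonic pulse velocity and compressive strength and computes the correlation coefficient; the sample values are invented.

```python
# Illustrative sketch with made-up measurements, not the data from the article.
import numpy as np

velocity = np.array([3.62, 3.75, 3.88, 3.97, 4.10, 4.21])   # km/s, hypothetical
strength = np.array([21.5, 25.0, 29.3, 32.1, 36.8, 40.2])   # MPa, hypothetical

slope, intercept = np.polyfit(velocity, strength, 1)         # least-squares line
r = np.corrcoef(velocity, strength)[0, 1]                    # Pearson correlation coefficient

print(f"strength ≈ {slope:.1f} * velocity + {intercept:.1f}, r = {r:.4f}")
```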

    Efficient and Direct Inference of Heart Rate Variability using Both Signal Processing and Machine Learning

    Full text link
    Heart Rate Variability (HRV) measures the variation of the time between consecutive heartbeats and is a major indicator of physical and mental health. Recent research has demonstrated that photoplethysmography (PPG) sensors can be used to infer HRV. However, many prior studies had high errors because they employed only signal processing or only machine learning (ML), because they inferred HRV indirectly, or because they lacked large training datasets. Many prior studies also require large ML models. The low accuracy and large model sizes limit their applicability to small embedded devices and their potential future use in healthcare. To address these issues, we first collected a large dataset of PPG signals and HRV ground truth. With this dataset, we developed HRV models that combine signal processing and ML to infer HRV directly. Evaluation results show that our method had errors between 3.5% and 25.7% and outperformed signal-processing-only and ML-only methods. We also explored different ML models, which showed that Decision Trees and Multi-layer Perceptrons have 13.0% and 9.1% errors on average, with model sizes of at most hundreds of KB and inference times of less than 1 ms. Hence, they are more suitable for small embedded devices and can potentially enable the future use of PPG-based HRV monitoring in healthcare.
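    A minimal sketch of the combined signal processing + ML approach described above (not the authors' actual pipeline): peaks are detected in a PPG window, inter-beat intervals yield basic time-domain HRV features, and a small tree model maps those features to an HRV estimate. The sampling rate, feature set, and toy training data are assumptions.

```python
# Illustrative sketch only: signal processing (peak detection) feeding a small ML model.
import numpy as np
from scipy.signal import find_peaks
from sklearn.tree import DecisionTreeRegressor

FS = 100  # Hz, assumed PPG sampling rate

def hrv_features(ppg_window):
    """Peak-to-peak intervals -> basic time-domain HRV features (mean IBI, SDNN, RMSSD)."""
    peaks, _ = find_peaks(ppg_window, distance=FS * 0.4)   # at least 0.4 s between beats
    ibi = np.diff(peaks) / FS * 1000.0                      # inter-beat intervals in ms
    if len(ibi) < 2:
        return np.zeros(3)
    sdnn = np.std(ibi)
    rmssd = np.sqrt(np.mean(np.diff(ibi) ** 2))
    return np.array([ibi.mean(), sdnn, rmssd])

# Toy training loop: X holds features from many PPG windows, y a reference HRV metric
# (e.g. SDNN from ECG); both would come from a collected dataset in practice.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))
y = rng.normal(size=200)
model = DecisionTreeRegressor(max_depth=6).fit(X, y)        # small model, fast inference
```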

    PPG-based Heart Rate Estimation with Efficient Sensor Sampling and Learning Models

    Full text link
    Recent studies showed that Photoplethysmography (PPG) sensors embedded in wearable devices can estimate heart rate (HR) with high accuracy. However, despite prior research efforts, applying PPG-based HR estimation to embedded devices still faces challenges due to the energy-intensive high-frequency PPG sampling and the resource-intensive machine-learning models. In this work, we aim to explore HR estimation techniques that are more suitable for low-power, resource-constrained embedded devices. More specifically, we seek to design techniques that provide high-accuracy HR estimation with low-frequency PPG sampling, small model size, and fast inference time. First, we show that by combining signal processing and ML, it is possible to reduce the PPG sampling frequency from 125 Hz to only 25 Hz while providing higher HR estimation accuracy. This combination also helps to reduce the ML model feature size, leading to smaller models. Additionally, we present a comprehensive analysis of different ML models and feature sizes to compare their accuracy, model size, and inference time. The models explored include Decision Tree (DT), Random Forest (RF), K-nearest neighbor (KNN), Support Vector Machine (SVM), and Multi-layer Perceptron (MLP). Experiments were conducted using both a widely used dataset and our self-collected dataset. The experimental results show that our method combining signal processing and ML had only 5% error for HR estimation using low-frequency PPG data. Moreover, our analysis showed that DT models with 10 to 20 input features usually have good accuracy, while being several orders of magnitude smaller in model size and faster in inference time.
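    The following Python sketch illustrates the general idea of low-frequency sampling plus a small learning model (the exact features and model settings in the paper may differ): PPG is decimated from 125 Hz to 25 Hz, a handful of spectral features in a plausible heart-rate band are extracted, and a Decision Tree with 10-20 input features is trained on toy labels.

```python
# Illustrative sketch under stated assumptions, not the paper's exact method.
import numpy as np
from scipy.signal import decimate
from sklearn.tree import DecisionTreeRegressor

rng = np.random.default_rng(1)

def spectral_features(ppg_125hz, n_features=10):
    """Decimate to 25 Hz, keep the strongest FFT bins in the 0.7-3.5 Hz (42-210 bpm) band."""
    ppg_25hz = decimate(ppg_125hz, 5)                      # 125 Hz -> 25 Hz
    spec = np.abs(np.fft.rfft(ppg_25hz - ppg_25hz.mean()))
    freqs = np.fft.rfftfreq(len(ppg_25hz), d=1 / 25.0)
    band = (freqs >= 0.7) & (freqs <= 3.5)                 # plausible heart-rate band
    idx = np.argsort(spec[band])[-n_features:]
    return np.concatenate([freqs[band][idx] * 60.0,        # candidate HRs in bpm
                           spec[band][idx] / spec[band].max()])

# Toy example: 8-second synthetic PPG segments at 125 Hz with a 1.5 Hz (90 bpm) component.
t = np.arange(0, 8, 1 / 125.0)
ppg = np.sin(2 * np.pi * 1.5 * t)
X = np.stack([spectral_features(ppg + 0.1 * rng.normal(size=t.size)) for _ in range(50)])
y = np.full(50, 90.0)                                      # toy HR labels in bpm
model = DecisionTreeRegressor(max_depth=8).fit(X, y)       # small model, 20 input features
```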

    Dynamic Sparse Training via Balancing the Exploration-Exploitation Trade-off

    Full text link
    Over-parameterization of deep neural networks (DNNs) has shown high prediction accuracy for many applications. Although effective, the large number of parameters hinders its popularity on resource-limited devices and has an outsize environmental impact. Sparse training (using a fixed number of nonzero weights in each iteration) could significantly mitigate the training costs by reducing the model size. However, existing sparse training methods mainly use either random-based or greedy-based drop-and-grow strategies, resulting in local minima and low accuracy. In this work, we treat dynamic sparse training as a sparse connectivity search problem and design an acquisition function that balances exploitation and exploration to escape from local optima and saddle points. We further provide theoretical guarantees for the proposed method and clarify its convergence property. Experimental results show that sparse models (up to 98% sparsity) obtained by our proposed method outperform the SOTA sparse training methods on a wide variety of deep learning tasks. On VGG-19 / CIFAR-100, ResNet-50 / CIFAR-10, and ResNet-50 / CIFAR-100, our method even achieves higher accuracy than dense models. On ResNet-50 / ImageNet, the proposed method yields up to 8.2% accuracy improvement compared to SOTA sparse training methods.
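    A generic drop-and-grow step is sketched below to make the exploration-exploitation idea concrete; the acquisition score used here (gradient magnitude plus a random term) is a simplification and not the paper's exact acquisition function.

```python
# Illustrative sketch of a generic drop-and-grow update for dynamic sparse training.
import numpy as np

def drop_and_grow(weights, mask, grads, k, explore=0.5, rng=np.random.default_rng()):
    """Drop the k smallest-magnitude active weights, grow k previously inactive ones by score."""
    active = np.flatnonzero(mask)
    inactive = np.flatnonzero(~mask)          # positions considered for growth this step

    # Drop: deactivate the k active weights with the smallest magnitude.
    drop_idx = active[np.argsort(np.abs(weights[active]))[:k]]
    mask[drop_idx] = False
    weights[drop_idx] = 0.0

    # Grow: score inactive positions by gradient magnitude (exploitation) plus noise
    # (exploration), then activate the top-k, initialized to zero.
    score = np.abs(grads[inactive]) + explore * rng.random(inactive.size)
    grow_idx = inactive[np.argsort(score)[-k:]]
    mask[grow_idx] = True
    return weights, mask

# Toy usage: 100 weights at 80% sparsity, one drop-and-grow step.
rng = np.random.default_rng(0)
w, g = rng.normal(size=100), rng.normal(size=100)
m = np.zeros(100, dtype=bool)
m[rng.choice(100, 20, replace=False)] = True
w, m = drop_and_grow(w * m, m, g, k=3, rng=rng)
```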

    EVE: Environmental Adaptive Neural Network Models for Low-power Energy Harvesting System

    Full text link
    IoT devices are increasingly being implemented with neural network models to enable smart applications. Energy harvesting (EH) technology, which harvests energy from the ambient environment, is a promising alternative to batteries for powering those devices due to its low maintenance cost and the wide availability of the energy sources. However, the power provided by the energy harvester is low and has an intrinsic drawback of instability since it varies with the ambient environment. This paper proposes EVE, an automated machine learning (autoML) co-exploration framework to search for desired multi-models with shared weights for energy harvesting IoT devices. These shared models incur a significantly reduced memory footprint with different levels of model sparsity, latency, and accuracy to adapt to environmental changes. An on-device implementation architecture is further developed to efficiently execute each model on the device. A run-time model extraction algorithm is proposed that retrieves an individual model with negligible overhead when a specific model mode is triggered. Experimental results show that the neural network models generated by EVE are on average 2.5X faster than baseline models without pruning and shared weights.
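    A minimal sketch of the shared-weights, multi-mode idea (hypothetical API, not EVE's actual implementation): each energy mode keeps only a binary mask over one shared weight tensor, so extracting the model for a given mode at run time is just a mask application.

```python
# Illustrative sketch: several model "modes" share one weight tensor per layer.
import numpy as np

class SharedWeightLayer:
    def __init__(self, weights, masks):
        self.weights = weights          # dense shared weights, shape (out, in)
        self.masks = masks              # {mode_name: boolean mask of the same shape}

    def extract(self, mode):
        """Return the effective sparse weights for the requested energy mode."""
        return self.weights * self.masks[mode]

    def forward(self, x, mode):
        return self.extract(mode) @ x

rng = np.random.default_rng(0)
w = rng.normal(size=(8, 16))
masks = {"high_energy": rng.random((8, 16)) < 0.9,   # ~10% sparsity, most accurate
         "low_energy": rng.random((8, 16)) < 0.3}    # ~70% sparsity, faster but less accurate
layer = SharedWeightLayer(w, masks)
y = layer.forward(rng.normal(size=16), mode="low_energy")
```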

    A Bi-Level Weibull Model with Applications to Two Ordered Events

    Get PDF
    In this paper, we propose and study a new bivariate Weibull model, called Bi-level Weibull Model, which arises when one failure occurs after the other. Under some specific regularity conditions, the reliability function of the second event can be above the reliability function of the first event, and is always above the reliability function of the transformed first event, which is a univariate Weibull random variable. This model is motivated by a common physical feature that arises from several real applications. The two marginal distributions are a Weibull distribution and a generalized three-parameter Weibull mixture distribution. Some useful properties of the model are derived, and we also present the maximum likelihood estimation method. A real example is provided to illustrate the application of the model.
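    As a small, hedged illustration of the maximum likelihood step mentioned above (the bi-level bivariate likelihood itself is not reproduced), the snippet below fits a univariate Weibull marginal by MLE with SciPy and evaluates its reliability (survival) function.

```python
# Illustrative sketch: MLE for a univariate Weibull marginal, not the bi-level model itself.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
true_shape, true_scale = 1.8, 10.0
data = stats.weibull_min.rvs(true_shape, scale=true_scale, size=500, random_state=rng)

# MLE of shape and scale with the location parameter fixed at zero.
shape_hat, loc_hat, scale_hat = stats.weibull_min.fit(data, floc=0)
print(f"shape ≈ {shape_hat:.2f}, scale ≈ {scale_hat:.2f}")

# Fitted reliability (survival) function R(t) = exp(-(t/scale)^shape) at a few time points.
t = np.linspace(0, 30, 5)
print(stats.weibull_min.sf(t, shape_hat, loc=loc_hat, scale=scale_hat))
```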

    Finishing the euchromatic sequence of the human genome

    Get PDF
    The sequence of the human genome encodes the genetic instructions for human physiology, as well as rich information about human evolution. In 2001, the International Human Genome Sequencing Consortium reported a draft sequence of the euchromatic portion of the human genome. Since then, the international collaboration has worked to convert this draft into a genome sequence with high accuracy and nearly complete coverage. Here, we report the result of this finishing process. The current genome sequence (Build 35) contains 2.85 billion nucleotides interrupted by only 341 gaps. It covers ∼99% of the euchromatic genome and is accurate to an error rate of ∼1 event per 100,000 bases. Many of the remaining euchromatic gaps are associated with segmental duplications and will require focused work with new methods. The near-complete sequence, the first for a vertebrate, greatly improves the precision of biological analyses of the human genome, including studies of gene number, birth and death. Notably, the human genome seems to encode only 20,000-25,000 protein-coding genes. The genome sequence reported here should serve as a firm foundation for biomedical research in the decades ahead.

    A Bivariate Maintenance Policy for Multi-State Repairable Systems with Monotone Process

    No full text
    DOI: 10.1109/TR.2013.2285042
    In this paper, a sequential failure limit maintenance policy for a repairable system is studied. The system is assumed to have multiple states, including one working state and several failure states, and the failure states are classified by, e.g., failure severity or failure cause. The system is replaced at a pre-specified failure count, and corrective maintenance is conducted immediately at each of the earlier failures. A reliability-centered preventive maintenance schedule is proposed in which, between two adjacent failures, a preventive maintenance action is taken as soon as the system reliability drops to a critical reliability threshold. Both preventive maintenance and corrective maintenance are assumed to be imperfect. Increasing and decreasing geometric processes are introduced to characterize the efficiency of these two types of maintenance. The objective is to derive an optimal maintenance policy such that the long-run expected cost per unit time is minimized. The explicit expression of the average cost rate is derived, and the corresponding optimal maintenance policy can be determined analytically or numerically. A numerical example is given to illustrate the theoretical results and the procedure. The decision model shows its adaptability to different possible characteristics of the maintained system.
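    To make the long-run expected cost per unit time concrete, the toy Monte Carlo sketch below uses the renewal-reward idea (long-run cost rate = expected cycle cost divided by expected cycle length) with lifetimes shrinking according to a geometric process; the cycle model and all parameter values are invented for illustration and do not reproduce the paper's multi-state formulation.

```python
# Illustrative sketch of the renewal-reward average cost rate, with a toy cycle model.
import numpy as np

def simulate_cycle(n_failures=5, base_life=100.0, ratio=0.9, c_cm=50.0, c_rep=500.0,
                   rng=np.random.default_rng(0)):
    """One replacement cycle: lifetimes shrink geometrically; repair early failures, replace at the last."""
    length, cost = 0.0, 0.0
    for i in range(n_failures):
        length += rng.exponential(base_life * ratio**i)   # geometric process: shorter lives over time
        cost += c_cm if i < n_failures - 1 else c_rep     # corrective maintenance, then replacement
    return cost, length

costs, lengths = zip(*(simulate_cycle() for _ in range(10_000)))
print("estimated long-run cost rate:", np.mean(costs) / np.mean(lengths))
```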